Deep 3D point cloud models are sensitive to adversarial attacks, which poses threats to safety-critical applications such as autonomous driving. Robust training and defense-by-denoising are typical strategies for defending against adversarial perturbations, exemplified by adversarial training and statistical filtering, respectively. However, they either incur massive computational overhead or rely heavily on specified noise priors, limiting generalized robustness against attacks of all kinds. This paper introduces a new defense mechanism based on denoising diffusion models that can adaptively remove diverse noises with a tailored intensity estimator. Specifically, we first estimate the adversarial distortion by computing the distance of each point to the best-fit plane of its neighborhood. Depending on the distortion degree, we choose a specific diffusion time step for the input point cloud and perform the forward diffusion to disrupt potential adversarial shifts. We then run the reverse denoising process to restore the disrupted point cloud to a clean distribution. This approach enables effective defense against adaptive attacks with varying noise budgets and markedly improves the robustness of existing 3D deep recognition models.
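A minimal sketch (NumPy only) of the distortion estimator described above: for each point, a plane is fit to its k nearest neighbors via SVD and the point-to-plane distance is measured; the mean distance is then mapped to a forward-diffusion step. The function names and the linear mapping to a time step are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def estimate_distortion(points: np.ndarray, k: int = 16) -> float:
    """points: (N, 3) array; returns the mean point-to-best-fit-plane distance."""
    dists = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    nn_idx = np.argsort(dists, axis=1)[:, 1:k + 1]        # k nearest neighbors (excluding self)
    per_point = []
    for i in range(points.shape[0]):
        nbrs = points[nn_idx[i]]
        centered = nbrs - nbrs.mean(axis=0)
        # normal of the best-fit plane = right-singular vector of the smallest singular value
        _, _, vt = np.linalg.svd(centered, full_matrices=False)
        normal = vt[-1]
        per_point.append(abs(np.dot(points[i] - nbrs.mean(axis=0), normal)))
    return float(np.mean(per_point))

def choose_timestep(distortion: float, t_max: int = 1000, scale: float = 0.05) -> int:
    """Map the estimated distortion to a forward-diffusion step (assumed linear mapping)."""
    return int(np.clip(distortion / scale, 0.0, 1.0) * t_max)
```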
Faced with the threat of identity leakage during voice data publishing, users are caught in a privacy-utility dilemma when enjoying convenient voice services. Existing studies employ direct modification or text-based re-synthesis to de-identify users' voices, but these result in inconsistent audibility in the presence of human participants. In this paper, we propose a voice de-identification system that uses adversarial examples to balance the privacy and utility of voice services. Instead of typical additive examples that induce perceivable distortions, we design a novel convolutional adversarial example that modulates perturbations into real-world room impulse responses. Benefiting from this, our system preserves user identity from exposure by Automatic Speaker Identification (ASI) while retaining the perceptual quality of the voice for non-intrusive de-identification. Moreover, our system learns a compact speaker distribution through a conditional variational auto-encoder to sample diverse target embeddings on demand. Combining diverse target generation with input-specific perturbation construction, our system enables any-to-any identity transformation for adaptive de-identification. Experimental results show that our system achieves 98% and 79% successful de-identification on mainstream ASIs and commercial systems, respectively, with an objective Mel cepstral distortion of 4.31 dB and a subjective mean opinion score of 4.48.
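A minimal sketch of the convolutional (rather than additive) perturbation idea: the adversarial example is the waveform convolved with a learned RIR-like filter, so the distortion resembles room reverberation instead of additive noise. The filter length, optimizer, loss, and the speaker_model/target_emb interfaces are assumptions for illustration only.

```python
import torch
import torch.nn.functional as F

def convolutional_adv_example(wave, speaker_model, target_emb,
                              rir_len=4096, steps=200, lr=1e-3):
    """wave: (1, T) waveform; speaker_model maps waveforms to speaker embeddings."""
    # start from an identity impulse so the initial "perturbation" leaves the voice unchanged
    init = torch.zeros(1, 1, rir_len)
    init[0, 0, 0] = 1.0
    rir = init.clone().requires_grad_(True)
    opt = torch.optim.Adam([rir], lr=lr)
    for _ in range(steps):
        adv = F.conv1d(wave.unsqueeze(0), rir, padding=rir_len - 1)[..., :wave.shape[-1]]
        emb = speaker_model(adv.squeeze(0))
        # drive the ASI embedding toward the sampled target-speaker embedding
        loss = 1.0 - F.cosine_similarity(emb, target_emb, dim=-1).mean()
        # (a realism constraint keeping `rir` close to plausible room impulse responses
        #  would be added to the loss here)
        opt.zero_grad()
        loss.backward()
        opt.step()
    with torch.no_grad():
        return F.conv1d(wave.unsqueeze(0), rir, padding=rir_len - 1)[..., :wave.shape[-1]].squeeze(0)
```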
Despite prominent performance in various applications, point cloud recognition models often suffer from natural corruptions and adversarial perturbations. In this paper, we delve into the general robustness of point cloud recognition models and propose Point-Cloud Contrastive Adversarial Training (PointCAT). The main intuition behind PointCAT is to encourage the target recognition model to narrow the decision gap between clean point clouds and corrupted point clouds. Specifically, we leverage a supervised contrastive loss to facilitate the alignment and uniformity of the hyperspherical features extracted by the recognition model, and design a pair of centralizing losses with dynamic prototype guidance to avoid these features deviating from their belonging class clusters. To provide more challenging corrupted point clouds, we adversarially train a noise generator along with the recognition model from scratch, rather than using gradient-based attacks as the inner loop as in previous adversarial training methods. Comprehensive experiments show that the proposed PointCAT outperforms baseline methods and significantly boosts the robustness of different point cloud recognition models under a variety of corruptions, including isotropic point noise, LiDAR-simulated noise, random point dropping, and adversarial perturbations.
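A minimal sketch of the supervised contrastive component mentioned above, applied to L2-normalized features so that clean and corrupted point clouds of the same class are pulled together. The temperature and batching are illustrative assumptions, and the centralizing losses and adversarially trained noise generator are not shown.

```python
import torch
import torch.nn.functional as F

def supervised_contrastive_loss(features, labels, temperature=0.1):
    """features: (B, D) features; labels: (B,) class labels for clean/corrupted samples."""
    z = F.normalize(features, dim=-1)
    sim = z @ z.t() / temperature                               # (B, B) similarity matrix
    mask_self = torch.eye(len(labels), device=z.device)
    mask_pos = (labels[:, None] == labels[None, :]).float() - mask_self  # same-class, non-self pairs
    logits = sim - 1e9 * mask_self                              # exclude self-similarity
    log_prob = logits - torch.logsumexp(logits, dim=1, keepdim=True)
    pos_count = mask_pos.sum(dim=1).clamp(min=1.0)
    return -(mask_pos * log_prob).sum(dim=1).div(pos_count).mean()
```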
This paper studies the problem of estimating, for a machine learning model, the contribution of a feature to the prediction of a specific instance, as well as the overall contribution of a feature to the model. The causal effect of a feature (variable) on the predicted outcome reflects the contribution of that feature to the prediction. A challenge is that most existing causal effects cannot be estimated from data without a known causal graph. In this paper, we define an explanatory causal effect based on a hypothetical ideal experiment. This definition brings several benefits to model-agnostic explanations. First, the explanations are transparent and carry causal meaning. Second, the estimation of explanatory causal effects can be data-driven. Third, the causal effects provide both local explanations for specific predictions and a global explanation showing the overall importance of a feature in the predictive model. We further propose methods for generating explanations based on explanatory causal effects and for combining variables. We demonstrate the definition and the methods with experiments on some real-world datasets.
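A minimal sketch of the "hypothetical ideal experiment" intuition for a binary feature: its effect on the prediction is approximated by intervening on that feature (setting it to 1 versus 0 for every sample) and averaging the prediction difference. This is an illustrative simplification, not the paper's exact estimator.

```python
import numpy as np

def explanatory_effect(model_predict, X: np.ndarray, j: int) -> float:
    """model_predict: callable mapping (N, d) -> (N,) predictions; j: index of a binary feature."""
    X_treat, X_ctrl = X.copy(), X.copy()
    X_treat[:, j] = 1.0        # do(X_j = 1)
    X_ctrl[:, j] = 0.0         # do(X_j = 0)
    return float(np.mean(model_predict(X_treat) - model_predict(X_ctrl)))
```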
In recent years, there has been increasing interest in few-shot knowledge graph completion (FKGC), which aims to infer unseen query triples of a relation given only a few reference triples of that relation. The main focus of existing FKGC methods lies in learning relation representations that reflect the common information shared by the query and reference triples. To this end, these methods learn entity-pair representations from the direct neighbors of head and tail entities and then aggregate the representations of the reference entity pairs. However, entity-pair representations learned only from direct neighbors may have low expressiveness when the involved entities have sparse direct neighbors or share a common local neighborhood with other entities. Moreover, merely modeling the semantic information of head and tail entities is insufficient to accurately infer their relational information, especially when they participate in multiple relations. To address these issues, we propose a Relation-Specific Context Learning (RSCL) framework, which exploits the graph contexts of triples to learn global and local relation-specific representations for few-shot relations. Specifically, we first extract the graph context of each triple, which provides long-range entity-relation dependencies. To encode the extracted graph contexts, we propose a hierarchical attention network that captures the contextual information of triples and highlights the valuable local neighborhood information of entities. Finally, we design a hybrid attention aggregator to evaluate the likelihood of query triples at the global and local levels. Experimental results on two public datasets demonstrate that RSCL outperforms state-of-the-art FKGC methods.
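A minimal sketch of the few-shot matching skeleton that such methods build on: reference entity-pair representations are aggregated into a relation representation, and a query pair is scored by its similarity to it. The pair encoder and the simple mean aggregation are assumptions; RSCL itself replaces them with graph contexts, a hierarchical attention network, and a hybrid attention aggregator.

```python
import torch
import torch.nn.functional as F

def score_query(pair_encoder, reference_pairs, query_pair):
    """pair_encoder maps a (head, tail) entity pair to a (D,) representation."""
    refs = torch.stack([pair_encoder(h, t) for h, t in reference_pairs])  # (K, D) references
    relation_repr = refs.mean(dim=0)                                      # simple aggregation
    query_repr = pair_encoder(*query_pair)
    return F.cosine_similarity(query_repr, relation_repr, dim=-1)         # higher = more plausible
```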
Local-to-global learning approaches play an important role in Bayesian network (BN) structure learning. Existing local-to-global learning algorithms first construct a DAG skeleton by learning the MB (Markov blanket) or PC (parents and children) set of each variable in a dataset, and then orient the edges in the skeleton. However, existing MB or PC learning methods are often computationally expensive, especially for large BNs, resulting in inefficient local-to-global learning algorithms. To tackle this problem, in this paper we develop an efficient local-to-global learning approach using feature selection. Specifically, we first analyze the rationale of the well-known Minimum-Redundancy Maximum-Relevance (MRMR) feature selection approach for learning the PC set of a variable. Based on this analysis, we propose an efficient F2SL (feature-selection-based structure learning) approach for local-to-global BN structure learning. The F2SL approach first employs the MRMR approach to learn the DAG skeleton and then orients the edges in the skeleton. Employing either independence tests or score functions for edge orientation, we instantiate the F2SL approach into two new algorithms, F2SL-C (using independence tests) and F2SL-S (using score functions). Compared with state-of-the-art local-to-global BN learning algorithms, the experiments validate that the proposed algorithms are more efficient while providing competitive structure-learning quality.
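A minimal sketch of greedy MRMR selection over discrete variables, used here to approximate the candidate PC set of a target variable before edge orientation. The fixed-size stopping rule and the mean-redundancy scoring are illustrative assumptions, not the paper's exact criterion.

```python
import numpy as np
from sklearn.metrics import mutual_info_score

def mrmr_pc_candidates(data: np.ndarray, target: int, k: int = 5):
    """data: (N, d) discrete samples; target: column index; returns up to k candidate columns."""
    d = data.shape[1]
    remaining = [v for v in range(d) if v != target]
    relevance = {v: mutual_info_score(data[:, v], data[:, target]) for v in remaining}
    selected = []
    while remaining and len(selected) < k:
        def score(v):
            # relevance to the target minus average redundancy with already-selected variables
            redundancy = (np.mean([mutual_info_score(data[:, v], data[:, s]) for s in selected])
                          if selected else 0.0)
            return relevance[v] - redundancy
        best = max(remaining, key=score)
        selected.append(best)
        remaining.remove(best)
    return selected
```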
Distributed training has become a pervasive and effective approach for training large-scale neural network (NN) models that process massive data. However, it is very challenging to satisfy the requirements of various NN models, diverse computing resources, and their dynamic changes during a training job. In this study, we design our distributed training framework from a systematic end-to-end view to provide built-in adaptive capability for different scenarios, especially for industrial applications and production environments, by fully considering resource allocation, model partitioning, task placement, and distributed execution. Based on a unified distributed graph and a unified cluster object, our adaptive framework is equipped with a global cost model and a global planner, which enable arbitrary parallelism, resource-aware placement, multi-mode execution, fault tolerance, and elastic distributed training. Experiments demonstrate that our framework satisfies the various requirements arising from the diversity of applications and the heterogeneity of resources with highly competitive performance. The ERNIE language model with 260 billion parameters is efficiently trained on thousands of AI processors with 91.7% weak scalability. By adopting heterogeneous pipeline asynchronous execution, the throughput of models from recommender systems can be increased by up to 2.1x and 3.3x compared with GPU-only and CPU-only training, respectively. Moreover, fault tolerance and elastic distributed training have been successfully applied to online industrial applications, reducing the number of long-running training jobs by 34.49% and increasing global scheduling efficiency by 33.91% in the production environment.
We address the challenge of recovering the underlying scene geometry and colors from a sparse set of RGBD view observations. In this work, we present a new solution that sequentially generates novel RGBD views along a camera trajectory, with the scene geometry simply being the fusion result of these views. More specifically, we maintain an intermediate surface mesh used for rendering new RGBD views, which are subsequently completed by an inpainting network; each rendered RGBD view is then back-projected as a partial surface and merged into the intermediate mesh. The use of the intermediate mesh and camera projection helps solve the refractory problem of multi-view inconsistency. We implement the RGBD inpainting network as a versatile RGBD diffusion model, previously used for 2D generative modeling, and modify its reverse diffusion process to enable our use case. We evaluate our approach on the task of 3D scene synthesis from sparse RGBD inputs; extensive experiments on the ScanNet dataset demonstrate the superiority of our approach over existing ones. Project page: https://jblei.site/project-pages/rgbd-diffusion.html
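A minimal sketch of the render-inpaint-fuse loop described above. All helpers (render_rgbd, inpaint_rgbd_diffusion, back_project, fuse_into_mesh) are hypothetical placeholders standing in for the renderer, the RGBD diffusion inpainter, and the mesh fusion step; they are not functions from the paper's code.

```python
def synthesize_scene(initial_mesh, camera_trajectory,
                     render_rgbd, inpaint_rgbd_diffusion, back_project, fuse_into_mesh):
    """Grow a scene mesh by sequentially generating RGBD views along a camera trajectory."""
    mesh = initial_mesh
    for camera in camera_trajectory:
        partial_rgbd = render_rgbd(mesh, camera)          # render the current mesh; contains holes
        full_rgbd = inpaint_rgbd_diffusion(partial_rgbd)  # complete it with the RGBD diffusion model
        surface = back_project(full_rgbd, camera)         # lift the completed view to a partial surface
        mesh = fuse_into_mesh(mesh, surface)              # supplement the intermediate mesh
    return mesh
```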
Facial attractiveness prediction (FAP) aims to assess facial attractiveness automatically based on human aesthetic perception. Previous methods using deep convolutional neural networks have boosted performance, but their large models lead to a deficiency in flexibility. Besides, most of them fail to take full advantage of the dataset. In this paper, we present a novel end-to-end FAP approach that integrates dual label distribution and a lightweight design. To make the best use of the dataset, the manual ratings, attractiveness score, and standard deviation are aggregated explicitly to construct a dual label distribution, comprising the attractiveness distribution and the rating distribution. These distributions, as well as the attractiveness score, are optimized under a joint learning framework based on the label distribution learning (LDL) paradigm. As for the lightweight design, the data processing is simplified to a minimum, and MobileNetV2 is selected as our backbone. Extensive experiments are conducted on two benchmark datasets, where our approach achieves promising results and strikes a balance between performance and efficiency. Ablation studies demonstrate that our delicately designed learning modules are indispensable and correlated. Additionally, visualizations indicate that our approach is capable of perceiving facial attractiveness and capturing attractive facial regions to facilitate semantic predictions.
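A minimal sketch of building the dual label distribution from raw annotations: a rating distribution (normalized histogram of the manual ratings) and an attractiveness distribution, here assumed to be a discretized Gaussian centered at the mean score with the observed standard deviation. The 1-5 integer rating scale and the Gaussian form are illustrative assumptions.

```python
import numpy as np

def dual_label_distribution(ratings, bins=np.arange(1, 6)):
    """ratings: per-image manual ratings on an assumed 1-5 integer scale."""
    ratings = np.asarray(ratings, dtype=float)
    mean, std = ratings.mean(), ratings.std() + 1e-6
    rating_dist = np.array([(ratings == b).mean() for b in bins])        # histogram over rating bins
    attract_dist = np.exp(-0.5 * ((bins - mean) / std) ** 2)             # discretized Gaussian
    attract_dist /= attract_dist.sum()
    return rating_dist, attract_dist, mean
```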
Neural networks are susceptible to data inference attacks such as the membership inference attack, the adversarial model inversion attack, and the attribute inference attack, where the attacker can infer useful information such as the membership, the reconstruction, or the sensitive attributes of a data sample from the confidence scores predicted by the target classifier. In this paper, we propose a method, namely PURIFIER, to defend against membership inference attacks. It transforms the confidence score vectors predicted by the target classifier so that the purified confidence scores are indistinguishable, in individual shape, statistical distribution, and prediction label, between members and non-members. Experimental results show that PURIFIER defends against membership inference attacks with high effectiveness and efficiency, outperforming previous defense methods while incurring negligible utility loss. Further experiments show that PURIFIER is also effective in defending against adversarial model inversion attacks and attribute inference attacks. For example, the inversion error increases by more than four times on the Facescrub530 classifier, and the attribute inference accuracy drops significantly when PURIFIER is deployed in our experiments.
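A minimal sketch of the confidence-purification idea: an autoencoder over confidence vectors replaces each prediction with its reconstruction, removing member-specific patterns while keeping the predicted label. The layer sizes, latent dimension, and the explicit label-preservation fallback are illustrative assumptions, not the paper's exact architecture or training objective.

```python
import torch
import torch.nn as nn

class ConfidencePurifier(nn.Module):
    def __init__(self, num_classes: int, latent_dim: int = 8):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(num_classes, 32), nn.ReLU(),
                                     nn.Linear(32, latent_dim))
        self.decoder = nn.Sequential(nn.Linear(latent_dim, 32), nn.ReLU(),
                                     nn.Linear(32, num_classes))

    def forward(self, conf: torch.Tensor) -> torch.Tensor:
        """conf: (B, C) confidence vectors from the target classifier."""
        purified = torch.softmax(self.decoder(self.encoder(conf)), dim=-1)
        # keep the original predicted label; otherwise fall back to the raw scores
        keep = purified.argmax(dim=-1) == conf.argmax(dim=-1)
        return torch.where(keep.unsqueeze(-1), purified, conf)
```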